
    How to get a conservative well-posed linear system out of thin air. Part II. Controllability and stability


    Cross-shaped and Degenerate Singularities in an Unstable Elliptic Free Boundary Problem

    We investigate singular and degenerate behavior of solutions of the unstable free boundary problem $\Delta u = -\chi_{\{u>0\}}$. First, we construct a solution that is not of class $C^{1,1}$ and whose free boundary consists of four arcs meeting in a cross-shaped singularity. This solution is completely unstable/repulsive from above and below, which would make it hard to obtain by the usual methods; even computing it numerically is non-trivial. We also show the existence of a degenerate solution. This answers two of the open questions in a recent paper by R. Monneau and G.S. Weiss.
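    For orientation, the "unstable" label reflects the equation's variational origin: it is the Euler-Lagrange equation of a non-convex energy whose critical points need not be minimizers. The display below is a standard computation under that assumption; the paper's exact functional and admissible function class are not stated here.

```latex
% Variational origin of \Delta u = -\chi_{\{u>0\}} (a sketch; the paper's
% precise functional and function space may differ). Critical points of
%   E(u) = \int_\Omega ( |\nabla u|^2 / 2 - \max(u,0) ) \, dx
% satisfy the equation wherever the set {u = 0} has measure zero:
\frac{d}{d\varepsilon}\Big|_{\varepsilon=0} E(u+\varepsilon\varphi)
  = \int_\Omega \big( \nabla u \cdot \nabla \varphi
      - \chi_{\{u>0\}}\,\varphi \big)\, dx = 0
  \quad \text{for all } \varphi \in C^\infty_c(\Omega)
  \;\Longleftrightarrow\; \Delta u = -\chi_{\{u>0\}} \ \text{in } \Omega.
```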

    Self-propagating High-temperature Synthesis (SHS) in the high activation energy regime

    We derive the precise limit of SHS in the high activation energy scaling suggested by B.J. Matkowsky and G.I. Sivashinsky in 1978 and by A. Bayliss, B.J. Matkowsky, and A.P. Aldushin in 2002. In the time-increasing case, the limit turns out to be the Stefan problem for supercooled water with spatially inhomogeneous coefficients. Although the present paper leaves open mathematical questions concerning the convergence, our precise form of the limit problem suggests a strikingly simple explanation for the numerically observed pulsating waves.
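    For reference, one classical strong formulation of the one-phase Stefan problem for supercooled water is displayed below. This is a sketch only, up to normalizations and sign conventions; the limit derived in the paper carries spatially inhomogeneous coefficients and is posed in a suitable weak form.

```latex
% One-phase supercooled Stefan problem, classical strong form (a sketch;
% normalizations, sign conventions, and the inhomogeneous coefficients of
% the paper's limit problem are not reproduced here). The temperature u
% satisfies u <= 0 in the liquid, and the freezing front advances into
% the supercooled region with normal velocity V_\nu:
\partial_t u = \Delta u \quad \text{in the liquid } \{u < 0\}, \qquad
u = 0 \ \text{ and } \ V_\nu = -\partial_\nu u
  \quad \text{on the free boundary } \partial\{u < 0\}.
```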

    Learning When Training Data are Costly: The Effect of Class Distribution on Tree Induction

    For large, real-world inductive learning problems, the number of training examples often must be limited due to the costs associated with procuring, preparing, and storing the training examples and/or the computational costs associated with learning from them. In such circumstances, one question of practical importance is: if only n training examples can be selected, in what proportion should the classes be represented? In this article we help to answer this question by analyzing, for a fixed training-set size, the relationship between the class distribution of the training data and the performance of classification trees induced from these data. We study twenty-six data sets and, for each, determine the best class distribution for learning. The naturally occurring class distribution is shown to generally perform well when classifier performance is evaluated using undifferentiated error rate (0/1 loss). However, when the area under the ROC curve is used to evaluate classifier performance, a balanced distribution is shown to perform well. Since neither of these choices for class distribution always generates the best-performing classifier, we introduce a budget-sensitive progressive sampling algorithm for selecting training examples based on the class associated with each example. An empirical analysis of this algorithm shows that the class distribution of the resulting training set yields classifiers with good (nearly optimal) classification performance.
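    A minimal sketch of such a budget-sensitive progressive-sampling loop is given below, assuming a geometric schedule, a fixed set of candidate class ratios, and stub helpers (procure, evaluate_auc) standing in for real data procurement and tree induction; none of these names or parameter choices come from the article itself.

```python
import random

def procure(label, k):
    """Stand-in for procuring k labeled examples of class `label` at unit
    cost each; fabricated points keep the sketch self-contained."""
    return [((random.random(), random.random()), label) for _ in range(k)]

def evaluate_auc(train):
    """Stand-in for inducing a classification tree on `train` and scoring
    it by AUC on a held-out set; a random placeholder so the loop runs."""
    return random.random()

def budget_sensitive_sample(budget, ratios=(0.25, 0.5, 0.75),
                            first=32, growth=2):
    """Grow the training set on a geometric schedule. At each schedule
    size, form every candidate minority/majority mix from shared per-class
    pools, keep the mix whose classifier scores best, and stop before any
    procurement would overshoot `budget`."""
    pool = {0: [], 1: []}            # examples procured so far, by class
    spent, size = 0, first
    best_ratio, best_size = 0.5, 0
    while True:
        # per-class demand to support all candidate mixes at this size
        need1 = max(int(r * size) for r in ratios) - len(pool[1])
        need0 = max(size - int(r * size) for r in ratios) - len(pool[0])
        cost = max(need1, 0) + max(need0, 0)
        if spent + cost > budget:    # this step would exceed the budget
            break
        pool[1] += procure(1, max(need1, 0))
        pool[0] += procure(0, max(need0, 0))
        spent += cost
        scores = {}
        for r in ratios:
            k = int(r * size)        # positives in this candidate mix
            scores[r] = evaluate_auc(pool[1][:k] + pool[0][:size - k])
        best_ratio, best_size = max(scores, key=scores.get), size
        size *= growth
    k = int(best_ratio * best_size)
    return pool[1][:k] + pool[0][:best_size - k]

if __name__ == "__main__":
    train = budget_sensitive_sample(budget=500)
    print(f"selected {len(train)} training examples")
```

    The design point of the sketch is that the per-class pools are shared across schedule steps and candidate ratios, so each procured example is paid for once, and the loop halts before any step would overshoot the budget.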
    • ā€¦
    corecore